Comparisons of Measurement Results as Constraints on Accuracies of Measuring Instruments: When Can We Determine the Accuracies from These Constraints?

Authors

  • Christian Servin
  • Vladik Kreinovich
Abstract

For a measuring instrument, a usual way to find the probability distribution of its measurement errors is to compare its results with the results of measuring the same quantity with a much more accurate instrument. But what if we are interested in estimating the measurement accuracy of a state-of-the-art measuring instrument, for which no more accurate instrument is possible? In this paper, we show that while such estimation is not possible in general, we can uniquely determine the corresponding probability distributions if we have several state-of-the-art measuring instruments and, for one of them, the corresponding probability distribution is symmetric.

1 Formulation of the Problem

Need to determine accuracies of measuring instruments. Most information comes from measurements. Measurement results are never absolutely accurate: the measurement result x̃ is, in general, different from the actual (unknown) value x of the corresponding quantity; see, e.g., [7]. To properly process data, it is therefore important to know how accurate our measurements are. Ideally, we would like to know the possible values of the measurement error $\Delta x \stackrel{\text{def}}{=} \tilde{x} - x$, and how frequent the different possible values of ∆x are. In other words, we would like to know the probability distribution on the set of all possible values of the measurement error ∆x.

How accuracies are usually determined: by using a second, much more accurate measuring instrument. One usual way to find the desired probability distribution is to have a second measuring instrument which is much more accurate than the one we want to estimate. In this case, the measurement error $\Delta x_2 = \tilde{x}_2 - x$ of the second instrument is much smaller than $\Delta x = \tilde{x} - x$, and thus the difference $\tilde{x} - \tilde{x}_2 = (\tilde{x} - x) - (\tilde{x}_2 - x)$ between the two measurement results can serve as a good approximation to the measurement error. From a sample of such differences, we can therefore find the desired probability distribution for ∆x.
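As a simple illustration of this standard calibration approach, here is a minimal sketch (not from the paper; the instrument readings and noise levels are hypothetical) that estimates the error distribution of an instrument from the differences between its readings and those of a much more accurate reference instrument.

```python
import numpy as np

# Hypothetical simultaneous readings of the same quantities:
# x_tilde -- readings of the instrument under study,
# x_ref   -- readings of a much more accurate reference instrument.
rng = np.random.default_rng(0)
true_values = rng.uniform(10.0, 20.0, size=10_000)
x_tilde = true_values + rng.normal(loc=0.05, scale=0.1, size=true_values.size)
x_ref = true_values + rng.normal(loc=0.0, scale=0.001, size=true_values.size)

# Since the reference error is negligible, the differences approximate
# the measurement errors Delta x = x_tilde - x of the studied instrument.
diffs = x_tilde - x_ref

# Empirical description of the error distribution:
bias = diffs.mean()                    # estimate of the systematic error
sigma = diffs.std(ddof=1)              # estimate of the standard deviation
hist, bin_edges = np.histogram(diffs, bins=50, density=True)  # empirical pdf

print(f"estimated bias  = {bias:.4f}")
print(f"estimated sigma = {sigma:.4f}")
```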
What if we do not have a more accurate measuring instrument? But what if the measuring instrument whose accuracy we want to estimate is among the best? In this case, we do not have a much more accurate measuring instrument. What can we do? In such situations, we can use the fact that usually there are several measuring instruments of the type that we want to analyze. Due to measurement errors, these instruments, in general, produce slightly different results when measuring the same quantity. It is therefore desirable to try to extract information about measurement accuracies from the differences between these measurement results.

Two possible situations. In some cases, we have a stable manufacturing process that produces several practically identical measuring instruments, for which the probability distributions of the measurement error are the same. In such cases, all we need to find is this common probability distribution. In other cases, we cannot ignore the differences between the instruments, and for each individual measuring instrument, we need to find its own probability distribution.

What is known: case of normal distribution. In many practical situations, the measurement error is caused by the joint effect of numerous independent small factors. In such situations, the Central Limit Theorem (see, e.g., [9]) implies that this distribution is close to Gaussian. A Gaussian distribution is uniquely determined by its mean (bias) and standard deviation σ.

When we only know the differences, we cannot determine the bias: it could be that all the measuring instruments have the same bias, and we will never detect this, since we only see the differences. Thus, it makes sense to limit ourselves to the random component of the measurement error, i.e., to the measurement error minus its mean value. For this "re-normalized" measurement error ∆x, the mean is 0. So, all we need to determine is the standard deviation σ. These standard deviations can indeed be determined; see, e.g., [4, 8]. Specifically, when we have two identical independent measuring instruments, with normally distributed measurement errors $\Delta x_1$ and $\Delta x_2$, the difference $\tilde{x}_2 - \tilde{x}_1$ is also normally distributed, with variance $V = \sigma^2 + \sigma^2 = 2\sigma^2$. Thus, once we experimentally determine the variance V of this observable difference, we can compute the desired variance as $\sigma^2 = V/2$. When we have several different measuring instruments, with unknown standard deviations $\sigma_1, \sigma_2, \sigma_3, \ldots$, then for each observable difference $\tilde{x}_i - \tilde{x}_j$ the variance is equal to $V_{ij} = \sigma_i^2 + \sigma_j^2$. Thus, once we experimentally determine the three variances $V_{12}$, $V_{23}$, and $V_{13}$, we can find the desired standard deviations by solving the corresponding system of three equations with three unknowns, $V_{12} = \sigma_1^2 + \sigma_2^2$, $V_{23} = \sigma_2^2 + \sigma_3^2$, and $V_{13} = \sigma_1^2 + \sigma_3^2$, whose solution is
$$\sigma_1^2 = \frac{V_{12} + V_{13} - V_{23}}{2}, \quad \sigma_2^2 = \frac{V_{12} + V_{23} - V_{13}}{2}, \quad \sigma_3^2 = \frac{V_{13} + V_{23} - V_{12}}{2}.$$
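The following sketch (not from the paper; the sample data are hypothetical) shows how these formulas can be applied in practice: the pairwise variances $V_{ij}$ are estimated from differences of simultaneous readings, and the three equations are then solved for $\sigma_1^2$, $\sigma_2^2$, $\sigma_3^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
true_values = rng.uniform(10.0, 20.0, size=10_000)

# Hypothetical readings of three different instruments measuring the same
# quantities, with standard deviations 0.10, 0.20, 0.05 unknown to the analyst.
x1 = true_values + rng.normal(0.0, 0.10, true_values.size)
x2 = true_values + rng.normal(0.0, 0.20, true_values.size)
x3 = true_values + rng.normal(0.0, 0.05, true_values.size)

# Observable variances of the pairwise differences: V_ij = sigma_i^2 + sigma_j^2.
V12 = np.var(x1 - x2, ddof=1)
V23 = np.var(x2 - x3, ddof=1)
V13 = np.var(x1 - x3, ddof=1)

# Solve the system of three equations with three unknowns.
sigma1 = np.sqrt((V12 + V13 - V23) / 2)
sigma2 = np.sqrt((V12 + V23 - V13) / 2)
sigma3 = np.sqrt((V13 + V23 - V12) / 2)

print(sigma1, sigma2, sigma3)  # should be close to 0.10, 0.20, 0.05
```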
Problem: what if distributions are not Gaussian? Empirical analysis of measuring instruments shows that only slightly more than half of them have Gaussian measurement errors [3, 6]. What happens in the non-Gaussian case? In such cases, sometimes we simply cannot uniquely reconstruct the corresponding distributions; see, e.g., [8]. In this paper, we explain when such a reconstruction is possible and when it is not.

2 Idea: Let Us Use Moments

Motivation for using moments. As we have mentioned, a Gaussian distribution with zero mean is uniquely determined by its second moment $M_2 = \sigma^2$. This means that all higher moments $M_k \stackrel{\text{def}}{=} E[(\Delta x)^k]$ are uniquely determined by the value $M_2$. In general, we may have values of $M_k$ which differ from the corresponding Gaussian values. Thus, to describe a general distribution, in addition to the second moment, we also need to describe its higher moments.

Moments are sufficient to uniquely describe a distribution: reminder. But even if we know all the moments, is this sufficient to uniquely determine the corresponding probability distribution? The answer is yes, and let us briefly recall why this is possible and how we can reconstruct the corresponding distribution. The usual way to represent a probability distribution of a random variable ∆x is by describing its probability density function (pdf) ρ(∆x). In many situations, it is convenient to use its characteristic function $\chi(\omega) \stackrel{\text{def}}{=} E[\exp(i \cdot \omega \cdot \Delta x)]$, where $i \stackrel{\text{def}}{=} \sqrt{-1}$, i.e., $\chi(\omega) = \int \rho(\Delta x) \cdot \exp(i \cdot \omega \cdot \Delta x)\, d(\Delta x)$. From the mathematical viewpoint, the characteristic function is the Fourier transform of the pdf, and it is known that we can uniquely reconstruct a function from its Fourier transform (this reconstruction is known as the inverse Fourier transform); see, e.g., [1, 2, 5, 10]. On the other hand, if we use the Taylor expansion of the exponential function $\exp(z) = 1 + z + \frac{z^2}{2!} + \frac{z^3}{3!} + \ldots + \frac{z^k}{k!} + \ldots$, then the characteristic function takes the form
$$\chi(\omega) = E\left[1 + i \cdot \omega \cdot \Delta x - \frac{1}{2!} \cdot \omega^2 \cdot (\Delta x)^2 + \ldots + \frac{i^k}{k!} \cdot \omega^k \cdot (\Delta x)^k + \ldots\right],$$
i.e.,
$$\chi(\omega) = 1 - \frac{1}{2} \cdot \omega^2 \cdot M_2 + \ldots + \frac{i^k}{k!} \cdot \omega^k \cdot M_k + \ldots$$
Thus, if we know all the moments $M_k$, we can uniquely reconstruct the characteristic function and thus uniquely reconstruct the desired pdf.

Important fact: for a symmetric distribution, odd moments are zero. In the following analysis, it is important that for a symmetric distribution, i.e., a distribution for which ρ(−∆x) = ρ(∆x), all odd moments $M_{2s+1} = \int \rho(\Delta x) \cdot (\Delta x)^{2s+1}\, d(\Delta x)$ are equal to 0. Indeed, if we replace ∆x with $\Delta x' \stackrel{\text{def}}{=} -\Delta x$, then $d(\Delta x) = -\,d(\Delta x')$ and $(\Delta x)^{2s+1} = -(\Delta x')^{2s+1}$, and thus the above integral takes the form
$$M_{2s+1} = -\int \rho(-\Delta x') \cdot (\Delta x')^{2s+1}\, d(\Delta x') = -\int \rho(\Delta x') \cdot (\Delta x')^{2s+1}\, d(\Delta x'),$$
so $M_{2s+1} = -M_{2s+1}$ and hence $M_{2s+1} = 0$.

3 Case When We Have Several Identical Measuring Instruments

Description of the case: reminder. In this case, we have several measuring instruments with the same probability distribution and thus with the same moments $M_2$, $M_3$, etc. The only available information consists of the differences $\Delta x_1 - \Delta x_2 = \tilde{x}_1 - \tilde{x}_2$. Based on the observations, we can determine the probability distribution for each such difference, and thus we can determine the moments $M'_k$ of this difference. We would like to use these observable moments $M'_k = E[(\Delta x_1 - \Delta x_2)^k]$ to find the desired moments $M_k = E[(\Delta x)^k]$.

What is known: case of second moments. For k = 2, we have $M'_2 = 2 M_2$, and thus we can uniquely reconstruct the desired second moment $M_2$ from the observed second moment $M'_2$.

Natural next case: third moments. Can we similarly reconstruct the desired third moment $M_3 = E[(\Delta x)^3]$ from the observed third moment $M'_3 = E[(\Delta x_1 - \Delta x_2)^3]$? Here,
$$(\Delta x_1 - \Delta x_2)^3 = (\Delta x_1)^3 - 3 \cdot (\Delta x_1)^2 \cdot \Delta x_2 + 3 \cdot \Delta x_1 \cdot (\Delta x_2)^2 - (\Delta x_2)^3,$$
so, due to the linearity of the mean and to the fact that the measurement errors $\Delta x_1$ and $\Delta x_2$ corresponding to the two measuring instruments are assumed to be independent, we conclude that
$$M'_3 = E[(\Delta x_1 - \Delta x_2)^3] = E[(\Delta x_1)^3] - 3 \cdot E[(\Delta x_1)^2] \cdot E[\Delta x_2] + 3 \cdot E[\Delta x_1] \cdot E[(\Delta x_2)^2] - E[(\Delta x_2)^3].$$
In this case, $E[\Delta x_i] = 0$ and $E[(\Delta x_1)^3] = E[(\Delta x_2)^3] = M_3$, so $M'_3 = M_3 - M_3 = 0$. In other words, the observed third moment $M'_3$ is always equal to 0 and thus carries no information about $M_3$. So, the only case when we can reconstruct $M_3$ is when we already know it. One such case is when we know that the distribution is symmetric. It turns out that in this case, we can reconstruct all the moments and thus uniquely reconstruct the original probability distribution.

When the probability distribution of the measurement error is symmetric, this distribution can be uniquely determined from the observed differences. For a symmetric distribution, all odd moments are equal to 0. Thus, to uniquely determine a symmetric distribution, it is sufficient to determine all its even moments $M_{2s}$. Let us prove, by induction, that we can reconstruct all these even moments. We already know that we can reconstruct $M_2$. Let us assume that we already know how to reconstruct the moments $M_2, \ldots, M_{2s}$. Let us show how to reconstruct the next moment $M_{2s+2} = E[(\Delta x)^{2s+2}]$. For this, we will use the observed moment $M'_{2s+2} = E[(\Delta x_1 - \Delta x_2)^{2s+2}]$. Here,
$$(\Delta x_1 - \Delta x_2)^{2s+2} = (\Delta x_1)^{2s+2} - (2s+2) \cdot (\Delta x_1)^{2s+1} \cdot \Delta x_2 + \frac{(2s+2) \cdot (2s+1)}{1 \cdot 2} \cdot (\Delta x_1)^{2s} \cdot (\Delta x_2)^2 - \ldots + \frac{(2s+2) \cdot (2s+1)}{1 \cdot 2} \cdot (\Delta x_1)^2 \cdot (\Delta x_2)^{2s} - (2s+2) \cdot \Delta x_1 \cdot (\Delta x_2)^{2s+1} + (\Delta x_2)^{2s+2}.$$
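The excerpt ends in the middle of this induction step, so the sketch below is not taken verbatim from the paper: it fills in the recursion that follows from the expansion above under the stated assumptions (independent, identically distributed, symmetric errors). Taking expectations, every term containing an odd power of $\Delta x_1$ or $\Delta x_2$ vanishes, and the two extreme terms each contribute $M_{2s+2}$, which gives $M_{2s+2} = \bigl(M'_{2s+2} - \sum_{j=1}^{s} \binom{2s+2}{2j} M_{2j}\, M_{2s+2-2j}\bigr)/2$. The data and the error distribution in the example are hypothetical.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)
true_values = rng.uniform(10.0, 20.0, size=200_000)

# Hypothetical example: two identical instruments whose errors follow a symmetric
# zero-mean distribution that is not Gaussian (here, uniform on [-0.3, 0.3]).
err1 = rng.uniform(-0.3, 0.3, true_values.size)
err2 = rng.uniform(-0.3, 0.3, true_values.size)
x1, x2 = true_values + err1, true_values + err2

# Observable: differences x1 - x2 = err1 - err2; estimate their even moments M'_k.
d = x1 - x2
max_order = 8
M_prime = {k: np.mean(d**k) for k in range(2, max_order + 1, 2)}

# Recursive reconstruction of the even moments M_k of a single instrument, using
# E[(err1 - err2)^k] = 2*M_k + sum over even j in (0, k) of C(k, j)*M_j*M_{k-j}
# (odd moments vanish by symmetry, and the two errors are independent).
M = {0: 1.0}
for k in range(2, max_order + 1, 2):
    cross = sum(comb(k, j) * M[j] * M[k - j] for j in range(2, k, 2))
    M[k] = (M_prime[k] - cross) / 2

print(M)  # for uniform on [-0.3, 0.3]: M_2 = 0.03, M_4 = 0.00162, ...
```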

Publication year: 2015